Nowadays, time-stamped web documents related to a general news query flood the Internet, and timeline summarization aims to concisely summarize the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this paper, we propose a Unified Timeline Summarizer (UTS) that can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we extract event-level attention during generation, with its sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist extractive summarization, where the extracted summary likewise follows the time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline17 dataset show that UTS achieves state-of-the-art performance in terms of both automatic and human evaluations.
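The event-level attention idea can be pictured with a small sketch. The pooling of token-level cross-attention over event spans, the monotone target schedule, and all names below are assumptions for illustration only, not the UTS implementation:

```python
# A minimal sketch (not the authors' code): pool decoder-to-encoder attention
# into event-level attention and regularize it toward a chronological schedule.
import numpy as np

def event_level_attention(token_attn, event_spans):
    """Pool attention [T_dec, T_enc] into [T_dec, n_events] over token spans."""
    pooled = np.stack(
        [token_attn[:, s:e].sum(axis=1) for (s, e) in event_spans], axis=1
    )
    return pooled / pooled.sum(axis=1, keepdims=True)  # renormalize per decoding step

def chronological_target(t_dec, n_events):
    """Toy target: attention mass moves monotonically over events as decoding proceeds."""
    centers = np.linspace(0, n_events - 1, t_dec)
    target = np.exp(-0.5 * (np.arange(n_events)[None, :] - centers[:, None]) ** 2)
    return target / target.sum(axis=1, keepdims=True)

def attention_alignment_loss(token_attn, event_spans):
    """KL(target || predicted event attention), averaged over decoding steps."""
    pred = event_level_attention(token_attn, event_spans)
    tgt = chronological_target(pred.shape[0], pred.shape[1])
    return float(np.mean(np.sum(tgt * (np.log(tgt + 1e-9) - np.log(pred + 1e-9)), axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((6, 20))
    attn /= attn.sum(axis=1, keepdims=True)
    spans = [(0, 5), (5, 12), (12, 20)]  # token ranges of three events
    print(attention_alignment_loss(attn, spans))
```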
Graph neural networks (GNNs) have achieved remarkable success in link prediction (GNNLP) tasks. Existing efforts first predefine a subgraph for the whole dataset and then apply GNNs to encode edge representations by leveraging the neighborhood structure induced by that fixed subgraph. The performance of GNNLP methods thus relies heavily on this ad-hoc subgraph. Since node connectivity in real-world graphs is complex, a single shared subgraph is too limited to serve all edges well; the choice of subgraph should instead be personalized to each edge. However, performing personalized subgraph selection is nontrivial, since the potential selection space grows exponentially with the number of edges. Besides, the inference edges are not available during training in link prediction scenarios, so the selection process needs to be inductive. To bridge the gap, we introduce a Personalized Subgraph Selector (PS2), a plug-and-play framework that automatically, personally, and inductively identifies optimal subgraphs for different edges when performing GNNLP. PS2 is instantiated as a bi-level optimization problem that can be solved efficiently and differentiably. Coupling GNNLP models with PS2, we suggest a brand-new angle on GNNLP training: first identify the optimal subgraphs for edges, and then train the inference model on the sampled subgraphs. Comprehensive experiments endorse the effectiveness of our proposed method across various GNNLP backbones (GCN, GraphSAGE, NGCF, LightGCN, and SEAL) and diverse benchmarks (Planetoid, OGB, and recommendation datasets). Our code is publicly available at \url{https://github.com/qiaoyu-tan/PS2}.
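As a rough illustration of the bi-level view only (PS2's actual candidate subgraphs, per-edge parameterization, and solver live in the paper and repository), the toy sketch below alternates an inner update of an encoder on training edges with an outer update of subgraph-selection logits on held-out edges:

```python
# A minimal, assumption-laden sketch of bi-level subgraph selection for link prediction.
import torch

n_nodes, dim, n_candidates = 100, 16, 3          # e.g. 1-hop / 2-hop / 3-hop candidates
node_feats = torch.randn(n_nodes, dim)

encoder = torch.nn.Linear(dim, dim)               # stand-in for a GNN encoder
select_logits = torch.nn.Parameter(torch.zeros(n_candidates))  # shared here; per-edge in PS2

def subgraph_repr(u, v, c):
    """Stand-in: candidate c yields a different (random but fixed) neighborhood pooling."""
    g = torch.Generator().manual_seed(int(u) * 1000 + int(v) * 10 + c)
    idx = torch.randint(0, n_nodes, (5 + 5 * c,), generator=g)
    return encoder(node_feats[idx].mean(0) + node_feats[u] + node_feats[v])

def edge_score(u, v):
    weights = torch.softmax(select_logits, dim=0)
    reps = torch.stack([subgraph_repr(u, v, c) for c in range(n_candidates)])
    return (weights @ reps).sum()                  # toy link score

train_edges = [(1, 2), (3, 4), (5, 6)]
val_edges = [(7, 8), (9, 10)]
inner_opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
outer_opt = torch.optim.Adam([select_logits], lr=1e-2)

for step in range(20):
    # inner step: fit the encoder on training edges (positive-only toy loss)
    inner_opt.zero_grad()
    inner_loss = -sum(torch.log(torch.sigmoid(edge_score(u, v))) for u, v in train_edges)
    inner_loss.backward()
    inner_opt.step()
    # outer step: update subgraph-selection logits on held-out edges
    outer_opt.zero_grad()
    outer_loss = -sum(torch.log(torch.sigmoid(edge_score(u, v))) for u, v in val_edges)
    outer_loss.backward()
    outer_opt.step()

print("selection weights:", torch.softmax(select_logits, 0).detach())
```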
In a citation graph, adjacent paper nodes share related scientific terms and topics. The graph thus conveys unique structural information about document-level relatedness that can be utilized in the paper summarization task to explore beyond intra-document information. In this work, we focus on leveraging citation graphs to improve extractive summarization of scientific papers under different settings. We first propose a Multi-granularity Unsupervised Summarization model (MUS) as a simple and low-cost solution to the task. MUS finetunes a pre-trained encoder model on the citation graph via link prediction tasks. Then, summary sentences are extracted from the corresponding paper by considering multi-granularity information. Preliminary results demonstrate that the citation graph is helpful even in a simple unsupervised framework. Motivated by this, we next propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available. Apart from employing link prediction as an auxiliary task, GSS introduces a gated sentence encoder and a graph information fusion module that take advantage of the graph information to polish the sentence representations. Experiments on a public benchmark dataset show that MUS and GSS bring substantial improvements over the prior state-of-the-art model.
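A minimal sketch of the link-prediction finetuning idea follows, with a toy bag-of-words encoder standing in for the pre-trained model; all names and the dot-product edge scorer are assumptions for illustration, not the MUS/GSS code:

```python
# A minimal sketch: train paper representations so that papers connected by a
# citation edge score higher than random paper pairs.
import torch

vocab, dim = 1000, 32

class PaperEncoder(torch.nn.Module):          # stand-in for a pre-trained encoder
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(vocab, dim)
    def forward(self, token_ids):             # token_ids: [batch, seq_len]
        return self.emb(token_ids)

encoder = PaperEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

def link_prediction_step(src_tokens, dst_tokens, labels):
    """labels[i] = 1 if the pair is a real citation edge, 0 for a negative pair."""
    h_src, h_dst = encoder(src_tokens), encoder(dst_tokens)
    logits = (h_src * h_dst).sum(dim=1)        # dot-product edge score
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy batch: 4 paper pairs, 2 citation edges and 2 negatives
src = torch.randint(0, vocab, (4, 50))
dst = torch.randint(0, vocab, (4, 50))
print(link_prediction_step(src, dst, torch.tensor([1., 1., 0., 0.])))
```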
Retrieval-augmented neural machine translation (NMT) models have been successful in many translation scenarios. Unlike previous works that use mutually similar but redundant translation memories (TMs), we propose a new retrieval-augmented NMT that models contrastively retrieved translation memories: TMs that are holistically similar to the source sentence while individually contrastive to each other, providing maximal information gain across three phases. First, in the TM retrieval phase, we adopt a contrastive retrieval algorithm to avoid the redundancy and uninformativeness of similar translation pieces. Second, in the memory encoding stage, given a set of TMs, we propose a novel Hierarchical Group Attention module to gather both the local context of each TM and the global context of the whole TM set. Finally, in the training phase, a multi-TM contrastive learning objective is introduced to learn the salient features of each TM with respect to the target sentence. Experimental results show that our framework obtains improvements over strong baselines on the benchmark datasets.
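One plausible way to realize the contrastive retrieval step is a greedy, MMR-style selection that trades off similarity to the source against redundancy with already-selected TMs. The sketch below is an assumption-laden illustration of that intuition, not the paper's exact algorithm:

```python
# A minimal sketch: select TMs similar to the source but mutually dissimilar.
import numpy as np

def contrastive_retrieve(src_vec, tm_vecs, k=4, lam=0.7):
    """Greedily pick k TMs maximizing lam*sim(src, tm) - (1-lam)*max_sim_to_selected."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    selected, candidates = [], list(range(len(tm_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((cos(tm_vecs[i], tm_vecs[j]) for j in selected), default=0.0)
            return lam * cos(src_vec, tm_vecs[i]) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
src = rng.standard_normal(64)          # embedding of the source sentence
tms = rng.standard_normal((50, 64))    # embeddings of candidate translation memories
print(contrastive_retrieve(src, tms, k=4))
```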
Regression trees are one of the oldest forms of AI models, and their predictions can be made without a calculator, which makes them broadly useful, particularly for high-stakes applications. Within the large literature on regression trees, there has been little effort towards fully provable optimization, mainly due to the computational hardness of the problem. This work proposes a dynamic-programming-with-bounds approach to the construction of provably optimal sparse regression trees. We leverage a novel lower bound based on the optimal solution to the k-Means clustering problem in one dimension over the set of labels. We are often able to find optimal sparse trees in seconds, even for challenging datasets that involve large numbers of samples and highly correlated features.
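The lower bound rests on a simple observation: a regression tree with k leaves predicts k constants, so its squared error on the labels cannot beat the optimal one-dimensional k-Means clustering of those labels. A minimal sketch of the exact 1-D k-Means dynamic program (not the paper's implementation) follows:

```python
# Exact 1-D k-Means by dynamic programming over sorted labels, O(k * n^2).
import numpy as np

def kmeans_1d_sse(labels, k):
    y = np.sort(np.asarray(labels, dtype=float))
    n = len(y)
    pre = np.concatenate([[0.0], np.cumsum(y)])
    pre2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    def seg_cost(i, j):  # SSE of y[i..j] (inclusive) around its mean
        s, s2, m = pre[j + 1] - pre[i], pre2[j + 1] - pre2[i], j - i + 1
        return s2 - s * s / m
    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for i in range(1, n + 1):
            dp[c][i] = min(dp[c - 1][j] + seg_cost(j, i - 1) for j in range(c - 1, i))
    return dp[k][n]

labels = [1.0, 1.1, 1.2, 5.0, 5.1, 9.0]
print(kmeans_1d_sse(labels, k=2))   # lower-bounds the SSE of any 2-leaf regression tree
```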
Given thousands of equally accurate machine learning (ML) models, how can users choose among them? A recent ML technique enables domain experts and data scientists to generate the complete Rashomon set for sparse decision trees, a set of almost equally optimal interpretable ML models. To help ML practitioners identify models with desirable properties from this Rashomon set, we develop TimberTrek, the first interactive visualization system that summarizes thousands of sparse decision trees at scale. Two usage scenarios highlight how TimberTrek enables users to easily explore, compare, and curate models that align with their domain knowledge and values. Our open-source tool runs directly in users' computational notebooks and web browsers, lowering the barrier to creating more responsible ML models. TimberTrek is available at the following public demo link: https://poloclub.github.io/timbertrek.
In any given machine learning problem, there may be many models that explain the data almost equally well. However, most learning algorithms return only one of these models, leaving practitioners with no practical way to explore alternative models that might have desirable properties beyond what can be expressed in a loss function. The Rashomon set is the set of all such nearly optimal models. Rashomon sets can be extremely complex, especially for highly nonlinear function classes that allow complex interaction terms, such as decision trees. We provide the first technique for completely enumerating the Rashomon set of sparse decision trees; in fact, our work provides the first enumeration of any Rashomon set for a nontrivial problem with a highly nonlinear, discrete function class. This gives users an unprecedented level of control over model choice among all models that are approximately equally good. We represent the Rashomon set in a specialized data structure that supports efficient querying and sampling. We show three applications of the Rashomon set: 1) it can be used to study the importance of variables over the set of almost-optimal trees (as opposed to a single tree), 2) the Rashomon set for accuracy enables the enumeration of Rashomon sets for balanced accuracy and F1-score, and 3) the Rashomon set for a full dataset can be used to produce Rashomon sets constructed using only subsets of the dataset. We are thus able to examine problems through the new lens of Rashomon sets, enabling users to choose models rather than be at the mercy of an algorithm that produces only a single model.
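To make the notion of a Rashomon set concrete, the sketch below enumerates it for a drastically simplified hypothesis class, depth-1 threshold stumps, rather than sparse decision trees; the paper's specialized data structure and enumeration algorithm are not reproduced here:

```python
# A minimal sketch: all stumps whose 0-1 loss is within (1 + eps) of the best stump.
import numpy as np

def stump_rashomon_set(X, y, eps=0.1):
    candidates = []
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                if sign == 1:
                    pred = (X[:, f] > thr).astype(int)
                else:
                    pred = (X[:, f] <= thr).astype(int)
                loss = float(np.mean(pred != y))
                candidates.append(((f, float(thr), sign), loss))
    best = min(loss for _, loss in candidates)
    return [(m, loss) for m, loss in candidates if loss <= (1 + eps) * best + 1e-12]

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0.5).astype(int)
rset = stump_rashomon_set(X, y, eps=0.2)
print(f"{len(rset)} nearly-optimal stumps; best loss = {min(l for _, l in rset):.3f}")
```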
Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning: they learn a new task from a few task demonstrations without any parameter updates. This work examines the implications of in-context learning for the creation of datasets for new natural language tasks. Departing from recent in-context learning methods, we formulate an annotation-efficient two-step framework: selective annotation, which chooses a pool of examples to annotate from unlabeled data in advance, followed by prompt retrieval, which retrieves task examples from the annotated pool at test time. Based on this framework, we propose an unsupervised, graph-based selective annotation method, vote-k, to select diverse, representative examples to annotate. Extensive experiments on 10 datasets (covering classification, commonsense reasoning, dialogue, and text/code generation) demonstrate that our selective annotation method improves task performance by a large margin. Compared with randomly selecting examples to annotate, vote-k achieves 12.9%/11.4% relative gains on average under the annotation budgets considered. Compared with state-of-the-art supervised finetuning approaches, it yields similar performance with 10-100x lower annotation cost across the 10 tasks. We further analyze the effectiveness of our framework in various scenarios: language models of varying sizes, alternative selective annotation methods, and cases with domain shift in the test data. We hope our study serves as a basis for data annotation as large language models are increasingly applied to new tasks. Our code is available at https://github.com/hkunlp/icl-selective-annotation.
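The graph-based selection idea can be illustrated with a heavily simplified sketch: build a k-NN graph over example embeddings and greedily pick examples whose neighborhoods are not yet covered. This shows only the diversity intuition, not the actual vote-k procedure:

```python
# A minimal sketch of graph-based diverse selection for annotation.
import numpy as np

def select_to_annotate(embeddings, budget, knn=10, discount=0.9):
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T
    neighbors = np.argsort(-sim, axis=1)[:, 1 : knn + 1]     # k nearest neighbors per node
    penalty = np.zeros(len(X))                                # how "covered" each node already is
    selected = []
    for _ in range(budget):
        scores = np.array([
            -np.inf if i in selected else
            sum(discount ** penalty[j] for j in neighbors[i]) # votes from uncovered neighbors
            for i in range(len(X))
        ])
        pick = int(np.argmax(scores))
        selected.append(pick)
        penalty[neighbors[pick]] += 1                         # discount this neighborhood next time
    return selected

rng = np.random.default_rng(0)
emb = rng.standard_normal((500, 64))   # stand-in for sentence embeddings of unlabeled data
print(select_to_annotate(emb, budget=20))
```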
Due to complex attention mechanisms and model designs, most existing vision Transformers (ViTs) cannot perform as efficiently as convolutional neural networks (CNNs) in realistic industrial deployment scenarios, e.g., TensorRT and CoreML. This poses a distinct challenge: can visual neural networks be designed to infer as fast as CNNs while performing as powerfully as ViTs? Recent works have tried to design CNN-Transformer hybrid architectures to address this issue, but their overall performance is far from satisfactory. To this end, we propose a next-generation vision Transformer for efficient deployment in realistic industrial scenarios, namely Next-ViT, which dominates both CNNs and ViTs from the perspective of the latency/accuracy trade-off. In this work, a Next Convolution Block (NCB) and a Next Transformer Block (NTB) are developed to capture local and global information, respectively, with deployment-friendly mechanisms. A Next Hybrid Strategy (NHS) is then designed to stack NCB and NTB in an efficient hybrid paradigm, boosting performance on various downstream tasks. Extensive experiments show that Next-ViT significantly outperforms existing CNNs, ViTs, and CNN-Transformer hybrid architectures with respect to the latency/accuracy trade-off across various vision tasks. On TensorRT, Next-ViT gains 5.4 mAP (from 40.4 to 45.8) on COCO detection and 8.2% mIoU (from 38.8% to 47.0%) on ADE20K segmentation under similar latency. Meanwhile, it achieves performance comparable to CSWin while inference is accelerated by 3.6x. On CoreML, Next-ViT gains 4.6 mAP (from 42.6 to 47.2) on COCO detection and 3.5% mIoU (from 45.2% to 48.7%) on ADE20K segmentation under similar latency. The code will be released soon.
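As a toy illustration of the hybrid stacking idea only (the blocks below are generic stand-ins, not the actual NCB/NTB designs), convolution-style blocks capture local features and an attention block appended per stage captures global context:

```python
# A minimal sketch of stacking convolutional and attention blocks stage by stage.
import torch
from torch import nn

class ConvBlock(nn.Module):                      # stand-in for a Next Convolution Block
    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depthwise conv
        self.pw = nn.Conv2d(dim, dim, 1)                         # pointwise conv
        self.bn = nn.BatchNorm2d(dim)
    def forward(self, x):
        return x + self.bn(self.pw(self.dw(x)))

class AttnBlock(nn.Module):                      # stand-in for a Next Transformer Block
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):                        # x: [B, C, H, W]
        b, c, h, w = x.shape
        seq = self.norm(x.flatten(2).transpose(1, 2))            # [B, H*W, C]
        out, _ = self.attn(seq, seq, seq)
        return x + out.transpose(1, 2).reshape(b, c, h, w)

# hybrid strategy: several conv blocks followed by one attention block per stage
stage = nn.Sequential(ConvBlock(64), ConvBlock(64), ConvBlock(64), AttnBlock(64))
print(stage(torch.randn(1, 64, 14, 14)).shape)   # torch.Size([1, 64, 14, 14])
```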
Transformer language models pre-trained on large unlabeled corpora have produced state-of-the-art results in natural language processing, organic molecule design, and protein sequence generation. However, such models have not yet been applied to learning the composition patterns of inorganic materials. Here we train seven modern transformer language models (GPT, GPT-2, GPT-Neo, GPT-J, BLMM, BART, and RoBERTa) on the expanded formulas of materials deposited in the ICSD, OQMD, and Materials Project databases. Six different datasets, with or without non-charge-neutral or balanced-electronegativity samples, are used to benchmark performance and to uncover the generation biases of modern transformer models for the generative design of material compositions. Our extensive experiments show that causal-language-model-based materials transformers can generate chemically valid material compositions, with up to 97.54% being charge neutral and 91.40% electronegativity balanced, a more than six-fold enrichment over a baseline pseudo-random sampling algorithm. These models also exhibit high novelty, and their potential for new materials discovery is demonstrated by their ability to recover held-out materials. We also find that the properties of the generated samples can be tailored by training the models on curated training sets, such as high-bandgap materials. Our experiments further show that different models have their own preferences regarding the properties of the generated samples, and that their runtime complexity varies considerably. We have applied our materials transformer models to discover a set of new materials, which are validated by DFT calculations.
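The charge-neutrality criterion used to judge validity can be sketched as follows; the hand-coded oxidation states and function names are illustrative assumptions, and a real pipeline would rely on a full chemistry toolkit:

```python
# A minimal sketch: a composition is charge neutral if some assignment of common
# oxidation states sums to zero charge.
from itertools import product

OXIDATION_STATES = {          # common oxidation states only, for illustration
    "Li": [1], "Na": [1], "K": [1], "Mg": [2], "Ca": [2], "Al": [3],
    "O": [-2], "S": [-2, 4, 6], "Cl": [-1], "F": [-1], "Ti": [2, 3, 4], "Fe": [2, 3],
}

def is_charge_neutral(composition):
    """composition: dict element -> count, e.g. {'Li': 2, 'O': 1}."""
    elements = list(composition)
    for states in product(*(OXIDATION_STATES.get(e, []) for e in elements)):
        charge = sum(state * composition[e] for state, e in zip(states, elements))
        if charge == 0:
            return True
    return False

print(is_charge_neutral({"Li": 2, "O": 1}))   # True  (2*+1 + 1*-2 = 0)
print(is_charge_neutral({"Na": 1, "O": 1}))   # False with only Na(+1) and O(-2)
```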